Convergence of Error-driven Ranking Algorithms
Author
Abstract
According to the OT error-driven ranking model of language acquisition, the learner performs a sequence of slight re-rankings triggered by mistakes on the incoming stream of data, until it converges to a ranking that makes no more mistakes. This learning model is very popular in the OT acquisition literature, in particular because it predicts a sequence of rankings that models gradualness in child acquisition. Two implementations of this learning model have been developed in the OT computational literature, namely Tesar and Smolensky’s (1998) Error-Driven Constraint Demotion (EDCD) and Boersma’s (1997) Gradual Learning Algorithm (GLA). Yet EDCD performs only constraint demotion, and it is thus shown to predict ranking dynamics that are too simple from a modeling perspective. The GLA performs both constraint demotion and promotion, but has been shown not to converge. This paper therefore develops a complete theory of convergence for error-driven ranking algorithms that perform both constraint demotion and promotion. In particular, it shows that convergent constraint promotion can be achieved (with an error bound that compares well to that of EDCD) through a proper calibration of the amount by which constraints are promoted.
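To make the update step concrete, the following is a minimal Python sketch of an error-driven ranker over numeric ranking values, in the spirit of the algorithms discussed above but not a transcription of the paper's exact procedure: on each error, loser-preferring constraints that are not yet dominated by a winner-preferring constraint are demoted, and winner-preferring constraints are promoted by a calibrated amount whose total stays below the total demotion. The constraint names, the data format and the specific calibration formula are illustrative assumptions.

```python
# Minimal sketch of an error-driven OT ranker with numeric ranking values.
# Demotion follows the EDCD pattern (only undominated loser-preferring
# constraints move down); promotion is spread over the winner-preferring
# constraints by a calibrated amount. Illustrative only.

def update(ranking, winner_prefs, loser_prefs):
    """One error-driven update.

    ranking      -- dict mapping constraint name -> numeric ranking value
    winner_prefs -- constraints preferring the intended winner (assumed non-empty,
                    as in any informative error)
    loser_prefs  -- constraints preferring the learner's (wrong) candidate
    """
    # Demote only the loser-preferring constraints not already dominated
    # by some winner-preferring constraint.
    top_winner = max(ranking[c] for c in winner_prefs)
    demoted = [c for c in loser_prefs if ranking[c] >= top_winner]
    for c in demoted:
        ranking[c] -= 1.0

    # Calibrated promotion: the total amount promoted is strictly smaller
    # than the total amount demoted.
    promotion = len(demoted) / (len(winner_prefs) + 1)
    for c in winner_prefs:
        ranking[c] += promotion
    return ranking


# Hypothetical toy error: C1 prefers the winner, C2 and C3 prefer the loser.
ranking = {"C1": 0.0, "C2": 0.0, "C3": 0.0}
print(update(ranking, winner_prefs=["C1"], loser_prefs=["C2", "C3"]))
```

Keeping the total promotion strictly below the total demotion, as above, is one natural calibration; which calibrations actually guarantee convergence, and with what error bound, is exactly what the paper works out.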
Similar Resources
Noise robustness and stochastic tolerance of OT error-driven ranking algorithms
Recent counterexamples show that Harmonic Grammar (HG) error-driven learning (with the classical Perceptron reweighting rule) is not robust to noise and does not tolerate the stochastic implementation (Magri 2014, MS). This article guarantees that no analogous counterexamples are possible for proper Optimality Theory (OT) error-driven learners. In fact, a simple extension of the OT convergence a...
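For concreteness, the classical Perceptron reweighting rule for Harmonic Grammar mentioned above can be sketched as follows; the toy violation profiles and the learning rate are illustrative assumptions rather than material from the article.

```python
import numpy as np

# Sketch of the Perceptron reweighting rule for Harmonic Grammar: on an error,
# constraint weights are nudged by the difference between the loser's and the
# winner's violation vectors. Illustrative only.

def harmony(weights, violations):
    # Harmony = negative weighted sum of violations.
    return -np.dot(weights, violations)

def hg_perceptron_update(weights, winner_viols, loser_viols, rate=0.1):
    # Update only on an error, i.e. when the loser is at least as harmonic
    # as the intended winner under the current weights.
    if harmony(weights, loser_viols) >= harmony(weights, winner_viols):
        weights = weights + rate * (loser_viols - winner_viols)
    return weights

w = np.zeros(3)                                # three constraints, equal weights
winner = np.array([1, 0, 0])                   # winner violates C1 once
loser = np.array([0, 1, 1])                    # loser violates C2 and C3
print(hg_perceptron_update(w, winner, loser))  # C2, C3 go up, C1 goes down
```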
Tools for the robust analysis of error-driven ranking algorithms and their implications for modelling the child's acquisition of phonotactics
Error-driven ranking algorithms (EDRAs) perform a sequence of slight re-rankings of the constraint set triggered by mistakes on the incoming stream of data. In general, the sequence of rankings entertained by the algorithm, and in particular the final ranking entertained at convergence, depend not only on the grammar the algorithm is trained on, but also on the specific way data are sampled fro...
An Analytical Model for Predicting the Convergence Behavior of the Least Mean Mixed-Norm (LMMN) Algorithm
The Least Mean Mixed-Norm (LMMN) algorithm is a stochastic gradient-based algorithm whose objective is to minimize a combination of the cost functions of the Least Mean Square (LMS) and Least Mean Fourth (LMF) algorithms. This algorithm has inherited many properties and advantages of the LMS and LMF algorithms and mitigated their weaknesses in some ways. The main issue of the LMMN algorithm is t...
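As a rough illustration of that combination, the sketch below follows the stochastic gradient of a mixed cost of the form δ·e² + (1−δ)·e⁴, so the weight update blends the LMS error term (proportional to e) with the LMF term (proportional to e³); constant factors are absorbed into the step size, and the step size, mixing parameter and toy data are illustrative assumptions, not values from the article.

```python
import numpy as np

# Sketch of an LMMN-style adaptive filter update: the weights follow the
# stochastic gradient of delta*e^2 + (1-delta)*e^4, blending the LMS and LMF
# error terms. Illustrative only.

def lmmn_step(w, x, d, mu=0.01, delta=0.5):
    e = d - w @ x                                      # a-priori estimation error
    return w + mu * x * (delta * e + (1 - delta) * e ** 3)

rng = np.random.default_rng(0)
w_true = np.array([0.5, -0.3, 0.8])                    # hypothetical plant to identify
w = np.zeros(3)
for _ in range(2000):
    x = rng.standard_normal(3)
    d = w_true @ x + 0.01 * rng.standard_normal()      # desired signal plus small noise
    w = lmmn_step(w, x, d)
print(w)                                               # should approach w_true
```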
Two Novel Learning Algorithms for CMAC Neural Network Based on Changeable Learning Rate
The Cerebellar Model Articulation Controller (CMAC) neural network is a computational model of the cerebellum which acts as a lookup table. The advantages of CMAC are fast learning convergence and the capability of mapping nonlinear functions, due to its local generalization of weight updating, single structure and easy processing. In the training phase, the disadvantage of some CMAC models is an unstable phenomenon...
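As a concrete illustration of the lookup-table view and the local weight updating mentioned above, here is a toy one-dimensional CMAC sketch; the number of tilings and cells, the learning rate and the target function are illustrative assumptions, not the algorithms proposed in that paper.

```python
import numpy as np

# Toy 1-D CMAC: overlapping tilings quantize the input, each input activates
# one weight per tiling, the output is the sum of the activated weights, and
# training spreads the error locally over exactly those weights.

class TinyCMAC:
    def __init__(self, n_tilings=8, n_cells=32, lo=0.0, hi=1.0):
        self.n_tilings, self.n_cells, self.lo, self.hi = n_tilings, n_cells, lo, hi
        self.w = np.zeros((n_tilings, n_cells))

    def _cells(self, x):
        span = (self.hi - self.lo) / self.n_cells
        for t in range(self.n_tilings):
            offset = t * span / self.n_tilings          # shift each tiling slightly
            idx = int((x - self.lo + offset) / span) % self.n_cells
            yield t, idx

    def predict(self, x):
        return sum(self.w[t, i] for t, i in self._cells(x))

    def train(self, x, target, lr=0.2):
        err = target - self.predict(x)
        for t, i in self._cells(x):                     # local weight update
            self.w[t, i] += lr * err / self.n_tilings

cmac = TinyCMAC()
for _ in range(3000):
    x = np.random.rand()
    cmac.train(x, np.sin(2 * np.pi * x))
print(cmac.predict(0.25))   # should be close to sin(pi/2) = 1.0
```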
Uniform Convergence, Stability and Learnability for Ranking Problems
Most studies have been devoted to the design of efficient algorithms and to their evaluation and application on diverse ranking problems, whereas little work has been devoted to theoretical studies of ranking learnability. In this paper, we study the relation between uniform convergence, stability and learnability of ranking. In contrast to supervised learning, where learnability is equivalent to unifor...